[TechM] Desh Deepak Yadav — Vibe Coding Submission #4
deshdeepakyadav wants to merge 4 commits into nasscomAI:main from
Conversation
👋 Hi there, participant! Thanks for joining our Vibe Coding Session! We're reviewing your PR for the 4 User Cases. Once your submission is validated and merged, you'll be awarded your completion badge! 🏆 Next Steps:
Vibe Coding Workshop — Submission PR
Name: Desh Deepak Yadav
TechM Group: TechM Vibe Coding Workshop
Date: 20 Apr 2026
AI tool(s) used: GitHub Copilot
Checklist — Complete Before Opening This PR
- agents.md committed for all 4 UCs
- skills.md committed for all 4 UCs
- classifier.py runs on test_[city].csv without crash
- results_[city].csv present in uc-0a/
- app.py for UC-0B, UC-0C, UC-X — all run without crash
- summary_hr_leave.txt present in uc-0b/
- growth_output.csv present in uc-0c/

UC-0A — Complaint Classifier
Which failure mode did you encounter first?
(taxonomy drift / severity blindness / missing justification / hallucinated sub-categories / false confidence)
What enforcement rule fixed it? Quote the rule exactly as it appears in your agents.md:
How many rows in your results CSV match the answer key?
(Tutor will release answer key after session)
Did all severity signal rows (injury/child/school/hospital) return Urgent?
Your git commit message for UC-0A:
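For reference, the severity behaviour this section checks can be sketched as a keyword override layered on top of the classifier's own guess. The function name and keyword rule below are illustrative, not taken from any participant's submission; the four signal words come from the question above.

```python
# Hypothetical sketch: force "Urgent" severity whenever a complaint mentions
# a high-risk signal, regardless of what the model itself predicted.
SEVERITY_SIGNALS = ("injury", "child", "school", "hospital")

def apply_severity_override(complaint_text: str, model_severity: str) -> str:
    """Return "Urgent" if any high-risk keyword appears, else keep the model's answer."""
    text = complaint_text.lower()
    if any(signal in text for signal in SEVERITY_SIGNALS):
        return "Urgent"
    return model_severity
```

An enforcement rule like this in agents.md is deterministic, so every severity-signal row returns Urgent even when the model is confidently wrong.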
UC-0B — Summary That Changes Meaning
Which failure mode did you encounter?
(clause omission / scope bleed / obligation softening)
List any clauses that were missing or weakened in the naive output (before your RICE fix):
After your fix — are all 10 critical clauses present in summary_hr_leave.txt?
Did the naive prompt add any information not in the source document (scope bleed)?
Your git commit message for UC-0B:
UC-0C — Number That Looks Right
What did the naive prompt return when you ran "Calculate growth from the data."?
Did it aggregate across all wards? Did it mention the 5 null rows?
After your fix — does your system refuse all-ward aggregation?
Does your growth_output.csv flag the 5 null rows rather than skipping them?
Does your output match the reference values (Ward 1 Roads +33.1% in July, −34.8% in October)?
Your git commit message for UC-0C:
UC-X — Ask My Documents
What did the naive prompt return for the cross-document test question?
(Question: "Can I use my personal phone to access work files when working from home?")
Did it blend the IT and HR policies?
After your fix — what does your system return for this question?
Did your system use any hedging phrases in any answer?
("while not explicitly covered", "typically", "generally understood")
Did all 7 test questions produce either a single-source cited answer or the exact refusal template?
Your git commit message for UC-X:
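The hedging check above is mechanical enough to automate. A minimal sketch, using only the three phrases listed in the question (any real checklist would likely carry more):

```python
# Scan an answer for the hedging phrases named in the UC-X question.
HEDGES = ("while not explicitly covered", "typically", "generally understood")

def find_hedges(answer: str) -> list[str]:
    """Return every listed hedging phrase that appears in the answer (case-insensitive)."""
    lowered = answer.lower()
    return [phrase for phrase in HEDGES if phrase in lowered]
```

Running this over all seven answers makes "no hedging anywhere" a verifiable claim instead of a judgment call.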
CRAFT Loop Reflection
Which CRAFT step was hardest across all UCs, and why?
What is the single most important thing you added manually to an agents.md that the AI did not generate on its own?
Name one real task in your work where you will apply RICE + CRAFT within the next two weeks:
Reviewer Notes (tutor fills this section)
Badge decision: